Optimal causal coding of general alphabet Markov sources with entropy and channel input constraints

Authors

  • Serdar Yüksel
  • Tamer Başar
  • Sean P. Meyn
Abstract

For Markov sources, the structure of optimal causal encoders minimizing the total communication rate subject to a mean-square distortion constraint is studied. Results on the nature of the non-stationarity of the optimal quantizers are presented. The sources considered take values in a continuous alphabet, and the encoder is allowed to be variable-rate. Both the finite-horizon and the infinite-horizon problems are considered. In the finite-horizon case the problem is non-convex, whereas in the infinite-horizon case it can be convexified under certain assumptions. For the finite-horizon problem, a Lagrangian framework is adopted. Upon showing the existence of a Lagrange multiplier, it is shown that the optimal deterministic causal encoder for a kth-order Markov source uses only the most recent k source symbols together with the information available at the receiver, whereas the optimal causal coder for a memoryless source is itself memoryless. For the infinite-horizon problem, a convex-analytic approach is adopted. Randomized stationary quantizers are suboptimal when the encoder and the decoder share no side information about the randomization; such shared information is referred to as common randomness. If common randomness is available, the optimal quantizer requires the randomization of at most two deterministic quantizers; in its absence, the optimal quantizer is non-stationary, and a recurrence-based time-sharing of two deterministic quantizers is optimal. Finally, a linear source driven by a Brownian disturbance is considered. If the process is stable, innovation coding is almost optimal at high rates, whereas if the source is unstable, even a high-rate time-invariant innovation coding scheme leads to an almost surely unstable estimation process.
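The two structural claims admit a compact restatement. The following is a minimal sketch in assumed notation (the symbols q_t, \eta_t, and \mathcal{I}_t are illustrative and not taken from the paper):

    % Optimal deterministic causal encoder for a kth-order Markov source:
    % the output at time t depends only on the last k source symbols and
    % on the information already available at the receiver.
    q_t = \eta_t\!\left(x_{t-k+1}, \ldots, x_t,\; \mathcal{I}_t\right),
    \qquad \mathcal{I}_t = \{q_0, \ldots, q_{t-1}\}.
    % For a memoryless source this collapses to q_t = \eta_t(x_t).

The instability of time-invariant innovation coding for an unstable source is also easy to observe numerically. The toy simulation below is only an illustration under assumptions of my own: it replaces the paper's Brownian-driven source with a discrete-time Gauss-Markov model x_{t+1} = a x_t + w_t, and the function name worst_error, the quantizer range qrange, the resolution bits, and the seed are arbitrary choices, not the paper's construction.

    import numpy as np

    def worst_error(a, qrange=2.0, bits=8, T=50_000, seed=0):
        """Innovation coding with a fixed (time-invariant) uniform quantizer.

        The estimation error e_t = x_t - xhat_t obeys
        e_{t+1} = a*e_t + w_t - Q(a*e_t + w_t), so it is tracked directly.
        """
        rng = np.random.default_rng(seed)
        step = 2.0 * qrange / 2 ** bits           # quantizer bin width
        e, worst = 0.0, 0.0
        for _ in range(T):
            innov = a * e + rng.normal()          # innovation seen by the encoder
            q = np.clip(np.round(innov / step) * step, -qrange, qrange)
            e = innov - q                         # updated estimation error
            worst = max(worst, abs(e))
            if worst > 1e6:                       # diverged beyond recovery
                break
        return worst

    print("stable   a=0.9:", worst_error(0.9))   # bounded: overload errors contract since |a| < 1
    print("unstable a=2.0:", worst_error(2.0))   # typically diverges once the error escapes the granular region

Once the error exceeds qrange/(a-1) for |a| > 1, the saturated quantizer can no longer pull it back and the error grows geometrically, which is the mechanism behind the almost-sure instability claimed above.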

Similar articles

Information Nonanticipative Rate Distortion Function and Its Applications

This paper investigates applications of the nonanticipative Rate Distortion Function (RDF) in zero-delay Joint Source-Channel Coding (JSCC) based on excess distortion probability, in bounding the Optimal Performance Theoretically Attainable (OPTA) by noncausal and causal codes, and in computing the Rate Loss (RL) of zero-delay and causal codes with respect to noncausal codes. These applications are...
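For context, the nonanticipative RDF is commonly formulated as a directed-information minimization over causal reproduction kernels. The statement below is background in assumed notation, not a quote from the truncated excerpt:

    % Nonanticipative RDF over a horizon 0..n: reproduction kernels may use
    % past reproductions and source symbols only up to the present time,
    % subject to an average distortion budget D.
    R^{na}_{0,n}(D) = \inf_{\substack{\{P(dy_t \mid y^{t-1}, x^t)\}_{t=0}^{n} \\ \mathbf{E}\left[\frac{1}{n+1}\sum_{t=0}^{n} d(x_t, y_t)\right] \le D}} I(X^n \to Y^n),
    \qquad I(X^n \to Y^n) = \sum_{t=0}^{n} I(X^t; Y_t \mid Y^{t-1}).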

Relative Entropy Rate between a Markov Chain and Its Corresponding Hidden Markov Chain

In this paper we study the relative entropy rate between a homogeneous Markov chain and the hidden Markov chain obtained by observing the output of a discrete stochastic channel whose input is a finite-state homogeneous stationary Markov chain. For this purpose, we obtain the relative entropy between two finite subsequences of the above-mentioned chains with the help of the definition of...
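Assuming the standard definition (supplied as background, not quoted from the excerpt), the relative entropy rate between two processes with length-n marginals p and q is the per-symbol limit of their divergence:

    % Relative entropy rate between processes P and Q (when the limit exists):
    D(P \,\|\, Q) = \lim_{n \to \infty} \frac{1}{n}\, D\!\left( p(x_1, \ldots, x_n) \,\middle\|\, q(x_1, \ldots, x_n) \right)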

Communication over Channels with Causal Side Information at the Transmitter

This work deals with communication over the AWGN channel with additive discrete interference, where the sequence of interference symbols is known causally at the transmitter. We use Shannon’s treatment of channels with side information at the transmitter as a framework to derive an “optimal precoding” scheme and a “channel code design criterion” for the channel with known interference at the transmitter. ...
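The framework referred to is Shannon's treatment of causal transmitter side information, which codes over "strategies" (maps from the current state to a channel input) rather than over inputs directly. Restated here as background in assumed notation:

    % Capacity with state S known causally at the transmitter: let T be a
    % random strategy t : \mathcal{S} \to \mathcal{X}, with input X = T(S).
    C = \max_{p(t)} I(T; Y)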

Separation Theorems for Phase-Incoherent Multiple-User Channels

We study the transmission of two correlated and memoryless sources (U, V) over several multiple-user phase-asynchronous channels. Namely, we consider a class of phase-incoherent multiple access relay channels (MARC) with both non-causal and causal unidirectional cooperation between encoders, referred to as phase-incoherent unidirectional non-causal cooperative MARC (PI-UNCC-MARC), and phase-inc...

Taylor Expansion for the Entropy Rate of Hidden Markov Chains

We study the entropy rate of a hidden Markov process, defined by observing the output of a symmetric channel whose input is a first-order Markov process. Although this definition is very simple, computing the exact entropy rate remains an open problem. We introduce probability matrices based on the Markov chain's and the channel's parameters. Then, we try to obtain an estimate ...
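For reference, the quantity studied is the standard entropy rate of the observed process; this restatement, and the symmetric-channel observation model written as additive noise, are background assumptions rather than quotes from the excerpt:

    % Entropy rate of the hidden Markov (observation) process {Y_t}:
    H(\mathcal{Y}) = \lim_{n \to \infty} \frac{1}{n} H(Y_1, \ldots, Y_n),
    % e.g. Y_t = X_t \oplus Z_t with {Z_t} i.i.d. Bernoulli(\varepsilon)
    % noise independent of the Markov input {X_t}.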


Journal:

Volume   Issue

Pages  -

Publication date: 2007